Rectal cancer segmentation network based on adjacent slice attention fusion
Donglei LAN, Xiaodong WANG, Yu YAO, Xin WANG, Jitao ZHOU
Journal of Computer Applications, 2023, 43(12): 3918-3926. DOI: 10.11772/j.issn.1001-9081.2023010045

To address the problem that rectal cancer target regions show different sizes, shapes, textures, and boundary clarity on Magnetic Resonance Imaging (MRI) images, and to overcome individual variability among patients and improve segmentation accuracy, an Adjacent Slice Attention Fusion Network for rectal cancer segmentation (ASAF-Net) was proposed. Firstly, with High Resolution Network (HRNet) as the backbone, the high-resolution feature representation was maintained throughout feature extraction, thereby reducing the loss of semantic information and spatial location information. Secondly, the multi-scale contextual semantic information between adjacent slices was fused and enhanced by the Adjacent Slice Attention Fusion (ASAF) module, so that the network was able to learn the spatial features between adjacent slices. Finally, in the decoder, Fully Convolutional Network (FCN) and Atrous Spatial Pyramid Pooling (ASPP) segmentation heads were co-trained, and the large differences between adjacent slices during training were reduced by adding a consistency constraint between adjacent slices as an auxiliary loss. Experimental results show that compared with HRNet, ASAF-Net improves the mean Intersection over Union (IoU) and mean Dice Similarity Coefficient (DSC) by 1.68 and 1.26 percentage points, respectively, and reduces the mean 95% Hausdorff Distance (HD) by 0.91 mm. At the same time, ASAF-Net achieves better segmentation results in both internal filling and edge prediction of multiple target regions in rectal cancer MRI images, and helps to improve physicians' efficiency in clinical auxiliary diagnosis.
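The core fusion idea can be illustrated with a short, hypothetical sketch: a channel-attention module that weights features from the previous, current, and next slices, plus an inter-slice consistency term usable as an auxiliary loss. This is a minimal illustration written against the abstract only; the module structure, layer sizes, and names (AdjacentSliceAttentionFusion, slice_consistency_loss) are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of adjacent-slice attention fusion and an
# inter-slice consistency auxiliary loss; names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdjacentSliceAttentionFusion(nn.Module):
    """Fuse features of a slice with its two neighbours via per-pixel attention."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(3 * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, kernel_size=1),  # one weight map per slice
        )
        self.proj = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, prev_feat, curr_feat, next_feat):
        stacked = torch.cat([prev_feat, curr_feat, next_feat], dim=1)
        weights = torch.softmax(self.attn(stacked), dim=1)   # (B, 3, H, W)
        fused = (weights[:, 0:1] * prev_feat
                 + weights[:, 1:2] * curr_feat
                 + weights[:, 2:3] * next_feat)
        return curr_feat + self.proj(fused)                  # residual fusion

def slice_consistency_loss(logits_curr, logits_next):
    """Auxiliary loss penalising large prediction differences between adjacent slices."""
    return F.mse_loss(torch.softmax(logits_curr, dim=1),
                      torch.softmax(logits_next, dim=1))

# Usage: fuse backbone features from three adjacent slices
prev_f, curr_f, next_f = (torch.randn(1, 64, 32, 32) for _ in range(3))
fused = AdjacentSliceAttentionFusion(channels=64)(prev_f, curr_f, next_f)
```

In this sketch the softmax over the three per-slice weight maps keeps the fusion a convex combination of neighbouring features, and the residual connection preserves the current slice's backbone features.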

Review of multi-modal medical image segmentation based on deep learning
Meng DOU, Zhebin CHEN, Xin WANG, Jitao ZHOU, Yu YAO
Journal of Computer Applications, 2023, 43(11): 3385-3395. DOI: 10.11772/j.issn.1001-9081.2022101636

Multi-modal medical images can provide clinicians with rich information about target areas (such as tumors, organs, or tissues). However, effective fusion and segmentation of multi-modal images remain challenging due to the independence and complementarity of the modalities. Traditional image fusion methods have difficulty addressing this problem, which has led to extensive research on deep learning-based multi-modal medical image segmentation algorithms. Deep learning-based multi-modal medical image segmentation was reviewed in terms of principles, techniques, problems, and prospects. Firstly, the general theory of deep learning and multi-modal medical image segmentation was introduced, including the basic principles and development of deep learning and Convolutional Neural Network (CNN), as well as the importance of the multi-modal medical image segmentation task. Secondly, the key concepts of multi-modal medical image segmentation were described, including data dimension, preprocessing, data augmentation, loss function, and post-processing. Thirdly, multi-modal segmentation networks based on different fusion strategies were summarized and analyzed. Finally, several common problems in medical image segmentation were discussed, and a summary and prospects for future research were given.
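As a concrete example of one of the surveyed fusion strategies, the sketch below shows input-level ("early") fusion, where co-registered modalities are stacked along the channel axis before a single CNN. It is an illustrative toy network, not a model from the review; the modality names and layer sizes are assumptions.

```python
# Illustrative sketch of input-level ("early") fusion of co-registered modalities.
import torch
import torch.nn as nn

class EarlyFusionSegNet(nn.Module):
    def __init__(self, num_modalities: int, num_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(num_modalities, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, modalities):
        # modalities: list of (B, 1, H, W) tensors, one per imaging modality
        x = torch.cat(modalities, dim=1)   # fuse at the input level
        return self.head(self.encoder(x))

# e.g. T1- and T2-weighted MRI plus CT of the same patient, already registered
t1, t2, ct = (torch.randn(1, 1, 128, 128) for _ in range(3))
logits = EarlyFusionSegNet(num_modalities=3, num_classes=2)([t1, t2, ct])
```

Later (layer-level or decision-level) fusion strategies would instead give each modality its own encoder and merge feature maps or predictions downstream.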

Session recommendation method based on graph model and attention model
Weichao DANG, Zhiyu YAO, Shangwang BAI, Gaimei GAO, Chunxia LIU
Journal of Computer Applications, 2022, 42(11): 3610-3616. DOI: 10.11772/j.issn.1001-9081.2021091696

To solve the problem that interest preference representations based on the Recurrent Neural Network (RNN) are incomplete and inaccurate in session recommendation, a Session Recommendation method based on Graph Model and Attention Model (SR-GM-AM) was proposed. Firstly, the graph model used a global graph and a session graph to obtain neighborhood information and session information respectively, and used a Graph Neural Network (GNN) to extract item graph features. These features were passed through the global item representation layer and the session item representation layer to obtain the global-level and session-level embeddings, which were combined into the graph embedding. Then, the attention model used soft attention to fuse the graph embedding and the reverse position embedding, target attention activated the relevance of the target items, and the session embedding was generated through a linear transformation. Finally, SR-GM-AM output the top-N recommendation list for the next click through the prediction layer. Comparative experiments between SR-GM-AM and Lossless Edge-order preserving aggregation and Shortcut graph attention for Session-based Recommendation (LESSR) were conducted on two real public e-commerce datasets, Yoochoose and Diginetica, and the results showed that SR-GM-AM achieved the highest P@20 of 72.41% and MRR@20 of 35.34%, verifying its effectiveness.
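The soft-attention step that fuses item graph embeddings with reverse position embeddings into a session embedding can be sketched as follows. The GNN that produces the item embeddings and the target-attention branch are omitted, and all module and parameter names are illustrative rather than taken from SR-GM-AM.

```python
# Hedged sketch: soft attention over item embeddings plus reverse position
# embeddings, followed by an inner-product prediction layer. Names are assumptions.
import torch
import torch.nn as nn

class SoftAttentionSessionEmbedding(nn.Module):
    def __init__(self, dim: int, max_len: int):
        super().__init__()
        self.pos_emb = nn.Embedding(max_len, dim)      # reverse position embedding
        self.w1 = nn.Linear(2 * dim, dim)
        self.w2 = nn.Linear(dim, 1, bias=False)
        self.out = nn.Linear(2 * dim, dim)             # linear transformation

    def forward(self, item_emb):
        # item_emb: (B, L, dim) graph embeddings of the items clicked in a session
        B, L, _ = item_emb.shape
        rev_pos = torch.arange(L - 1, -1, -1, device=item_emb.device)  # L-1, ..., 0
        pos = self.pos_emb(rev_pos).unsqueeze(0).expand(B, -1, -1)
        mixed = torch.tanh(self.w1(torch.cat([item_emb, pos], dim=-1)))
        alpha = torch.softmax(self.w2(mixed), dim=1)    # soft attention weights
        sess = (alpha * item_emb).sum(dim=1)            # weighted session summary
        last = item_emb[:, -1]                          # last-clicked item
        return self.out(torch.cat([sess, last], dim=-1))

def score_items(session_emb, all_item_emb):
    """Prediction layer: score every candidate item by inner product, then rank."""
    return session_emb @ all_item_emb.t()               # (B, num_items)

# Usage: two sessions of five clicks each, ranked against 1000 candidate items
items = torch.randn(2, 5, 64)
sess = SoftAttentionSessionEmbedding(dim=64, max_len=50)(items)
scores = score_items(sess, torch.randn(1000, 64))
```

Reversing the position index means the most recently clicked items receive the lowest position offsets, which is what lets the attention weights emphasise recency.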
